4 research outputs found
Multifidelity Modeling for Physics-Informed Neural Networks (PINNs)
Multifidelity simulation methodologies judiciously combine low-fidelity and
high-fidelity simulation results to increase accuracy while reducing cost.
Candidates for this approach are simulation methodologies in which differences
in fidelity correspond to significant differences in computational cost.
Physics-informed Neural Networks (PINNs) are such candidates due to the
significant difference in training times required when different fidelities
(expressed in terms of architecture width and depth as well as optimization
criteria) are employed. In this paper, we propose a particular multifidelity
approach applied to PINNs that exploits low-rank structure. We demonstrate that
width, depth, and optimization criteria can be used as parameters related to
model fidelity, and show numerical justification of cost differences in
training due to fidelity parameter choices. We test our multifidelity scheme on
various canonical forward PDE models that have been presented in the emerging
PINNs literature.
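The combination idea described above can be sketched with a classic multifidelity surrogate of the form y_HF(x) ≈ ρ·y_LF(x) + δ(x), where a cheap, biased low-fidelity model is corrected using a handful of expensive high-fidelity samples. The functions, sample counts, and linear correction below are invented for illustration; this is not the paper's low-rank PINN scheme.

```python
import numpy as np

def low_fidelity(x):
    # Cheap, biased approximation of the true solution sin(pi x).
    return 0.8 * np.sin(np.pi * x) + 0.1

def true_solution(x):
    # Stand-in for the expensive high-fidelity model.
    return np.sin(np.pi * x)

# Fit the scalar rho and a simple additive correction delta(x) = b + c*x
# from only a few "high-fidelity" samples.
x_hf = np.linspace(0.0, 1.0, 5)
A = np.column_stack([low_fidelity(x_hf), np.ones_like(x_hf), x_hf])
coef, *_ = np.linalg.lstsq(A, true_solution(x_hf), rcond=None)

def multifidelity(x):
    # y_MF(x) = rho * y_LF(x) + b + c*x
    return coef[0] * low_fidelity(x) + coef[1] + coef[2] * x

x_test = np.linspace(0.0, 1.0, 101)
err_lf = np.max(np.abs(low_fidelity(x_test) - true_solution(x_test)))
err_mf = np.max(np.abs(multifidelity(x_test) - true_solution(x_test)))
```

Here five high-fidelity evaluations suffice to correct the low-fidelity bias over the whole domain, which is the cost-saving effect multifidelity schemes aim for.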
Neural Operator Learning for Ultrasound Tomography Inversion
Neural operator learning as a means of mapping between complex function
spaces has garnered significant attention in the field of computational science
and engineering (CS&E). In this paper, we apply neural operator learning to the
time-of-flight ultrasound computed tomography (USCT) problem. We learn the
mapping between time-of-flight (TOF) data and the heterogeneous sound speed
field using a full-wave solver to generate the training data. This novel
application of operator learning circumvents the need to solve the
computationally intensive iterative inverse problem. The operator learns the
non-linear mapping offline and predicts the heterogeneous sound field with a
single forward pass through the model. This is the first time operator learning
has been used for ultrasound tomography and is the first step in potential
real-time predictions of soft tissue distribution for tumor identification in
breast imaging.
Comment: 4 pages, 1 figure
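A toy stand-in for the pipeline described above: an inverse map is fit offline on solver-generated (TOF, field) pairs, so recovering a new field takes a single forward pass instead of an iterative inversion. The shapes, the straight-ray forward model, and the least-squares "operator" are invented for illustration (the paper uses a full-wave solver and a neural operator); for simplicity the toy predicts slowness (reciprocal sound speed), which this forward model relates to TOF linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_rays, n_train = 16, 32, 200

# Straight-ray forward model: each ray's TOF is a weighted sum of
# pixel slownesses along its path.
ray_weights = rng.random((n_rays, n_pixels))

fields = rng.uniform(1400.0, 1600.0, (n_train, n_pixels))  # sound speed, m/s
slowness = 1.0 / fields                                    # s/m
tofs = slowness @ ray_weights.T                            # solver-generated data

# "Operator learning" reduced to its simplest form: a least-squares
# linear map from TOF measurements back to the slowness field, fit offline.
W, *_ = np.linalg.lstsq(tofs, slowness, rcond=None)

# A new, unseen field is then recovered with one matrix multiply.
field_true = rng.uniform(1400.0, 1600.0, n_pixels)
tof_new = (1.0 / field_true) @ ray_weights.T
field_pred = 1.0 / (tof_new @ W)

rel_err = np.linalg.norm(field_pred - field_true) / np.linalg.norm(field_true)
```

All the cost here sits in the offline fit; the online prediction is a single matrix-vector product, mirroring the offline-training / single-forward-pass split in the abstract.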
A Metalearning Approach for Physics-Informed Neural Networks (PINNs): Application to Parameterized PDEs
Physics-informed neural networks (PINNs) as a means of discretizing partial
differential equations (PDEs) are garnering much attention in the Computational
Science and Engineering (CS&E) world. At least two challenges exist for PINNs
at present: an understanding of accuracy and convergence characteristics with
respect to tunable parameters and identification of optimization strategies
that make PINNs as efficient as other computational science tools. The cost of
PINNs training remains a major challenge of Physics-informed Machine Learning
(PiML) - and, in fact, machine learning (ML) in general. This paper is meant to
move towards addressing the latter through the study of PINNs on new tasks, for
which parameterized PDEs provide a good testbed application as tasks can be
easily defined in this context. Following the ML world, we introduce
metalearning of PINNs with application to parameterized PDEs. By introducing
metalearning and transfer learning concepts, we can greatly accelerate the
PINNs optimization process. We present a survey of model-agnostic metalearning,
and then discuss our model-aware metalearning applied to PINNs as well as
implementation considerations and algorithmic complexity. We then test our
approach on various canonical forward parameterized PDEs that have been
presented in the emerging PINNs literature.
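The warm-starting effect described above can be sketched on a minimal parameterized task family. Here "metalearning" is reduced to averaging the optima of previously solved tasks and using that average to initialize gradient descent on a new task; the task family, model, and convergence criterion are invented for illustration and are not the paper's model-aware algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)

def steps_to_converge(w0, a, lr=0.05, tol=1e-3, max_steps=10_000):
    """Gradient descent on 0.5*mean((w*x - a*x)**2) starting from w0."""
    w = w0
    for step in range(max_steps):
        grad = np.mean((w * x - a * x) * x)
        w -= lr * grad
        if abs(w - a) < tol:
            return step + 1
    return max_steps

# "Meta-train": solve a batch of tasks drawn from the parameterized family
# f_a(x) = a*x and average their optima (each task's optimum is w = a).
train_params = rng.uniform(2.0, 3.0, 20)
meta_init = np.mean(train_params)

# A new task from the same family: the metalearned initialization sits
# close to the new optimum, so far fewer optimizer steps are needed.
a_new = 2.4
cold_steps = steps_to_converge(0.0, a_new)
warm_steps = steps_to_converge(meta_init, a_new)
```

The speedup comes entirely from the initialization being near the family's optima, which is the same mechanism that makes transfer across parameterized PDE tasks attractive for PINNs.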